-
Summary: Randomized experiments have been the gold standard for drawing causal inference. The conventional model-based approach has been one of the most popular methods of analysing treatment effects from randomized experiments, which is often carried out through inference for certain model parameters. In this paper, we provide a systematic investigation of model-based analyses for treatment effects under the randomization-based inference framework. This framework does not impose any distributional assumptions on the outcomes, covariates and their dependence, and utilizes only randomization as the reasoned basis. We first derive the asymptotic theory for $Z$-estimation in completely randomized experiments, and propose sandwich-type conservative covariance estimation. We then apply the developed theory to analyse both average and individual treatment effects in randomized experiments. For the average treatment effect, we consider model-based, model-imputed and model-assisted estimation strategies, where the first two strategies can be sensitive to model misspecification or require specific methods for parameter estimation. The model-assisted approach is robust to arbitrary model misspecification and always provides consistent average treatment effect estimation. We propose optimal ways to conduct model-assisted estimation using generally nonlinear least squares for parameter estimation. For the individual treatment effects, we propose directly modelling the relationship between individual effects and covariates, and discuss the model's identifiability, inference and interpretation allowing for model misspecification.
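A minimal sketch of the kind of model-assisted average-treatment-effect estimation described in the abstract above: fit a working outcome model within each arm, combine the prediction contrast with arm-specific residual corrections, and use a Neyman-style conservative variance. Function names and the linear working model are illustrative assumptions, not the paper's code.

```python
import numpy as np

def ols_fit(X, y):
    """Least-squares coefficients for the design [1, X]."""
    D = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta

def ols_predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def model_assisted_ate(y, z, X):
    """Model-assisted ATE estimate and a conservative variance estimate
    for a completely randomized experiment (z in {0, 1})."""
    t, c = z == 1, z == 0
    beta1 = ols_fit(X[t], y[t])          # working model fitted in the treated arm
    beta0 = ols_fit(X[c], y[c])          # working model fitted in the control arm
    m1, m0 = ols_predict(beta1, X), ols_predict(beta0, X)
    # prediction contrast over all units plus within-arm residual corrections
    tau_hat = np.mean(m1 - m0) + np.mean(y[t] - m1[t]) - np.mean(y[c] - m0[c])
    # Neyman-style conservative variance based on within-arm residuals
    e1, e0 = y[t] - m1[t], y[c] - m0[c]
    var_hat = e1.var(ddof=1) / t.sum() + e0.var(ddof=1) / c.sum()
    return tau_hat, var_hat

# toy usage on simulated data
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
z = rng.permutation(np.repeat([0, 1], n // 2))
y = 1.0 * z + X @ np.array([0.5, -0.3]) + rng.normal(size=n)
tau, var = model_assisted_ate(y, z, X)
print(f"ATE estimate {tau:.3f}, 95% CI half-width {1.96 * var**0.5:.3f}")
```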
-
Abstract: Neyman’s seminal work in 1923 has been a milestone in statistics over the century, which has motivated many fundamental statistical concepts and methodology. In this review, we delve into Neyman’s groundbreaking contribution and offer technical insights into the design and analysis of randomized experiments. We shall review the basic setup of completely randomized experiments and the classical approaches for inferring the average treatment effects. We shall, in particular, review more efficient design and analysis of randomized experiments by utilizing pretreatment covariates, which move beyond Neyman’s original work without involving any covariate. We then summarize several technical ingredients regarding randomizations and permutations that have been developed over the century, such as permutational central limit theorems and Berry–Esseen bounds, and we elaborate on how these technical results facilitate the understanding of randomized experiments. The discussion is also extended to other randomized experiments including rerandomization, stratified randomized experiments, matched pair experiments, and cluster randomized experiments.
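A minimal sketch of the classical Neyman analysis this review revisits: the difference-in-means estimate of the average treatment effect in a completely randomized experiment, paired with the conservative variance estimate $s_1^2/n_1 + s_0^2/n_0$. Illustrative code only, not taken from the review.

```python
import numpy as np

def neyman_difference_in_means(y, z):
    """Return (tau_hat, var_hat) for a binary treatment indicator z."""
    y1, y0 = y[z == 1], y[z == 0]
    tau_hat = y1.mean() - y0.mean()
    # Neyman's variance estimator; conservative for the finite-population variance
    var_hat = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return tau_hat, var_hat

# toy usage
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(1.0, 1, 50), rng.normal(0.0, 1, 50)])
z = np.concatenate([np.ones(50, int), np.zeros(50, int)])
tau, var = neyman_difference_in_means(y, z)
print(tau, tau - 1.96 * var**0.5, tau + 1.96 * var**0.5)
```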
-
Abstract: Objective. This paper introduces a novel PET imaging methodology called 3-dimensional positron imaging (3Dπ), which integrates total-body coverage, time-of-flight (TOF) technology, ultra-low dose imaging capabilities, and ultra-fast readout electronics inspired by emerging technology from the DarkSide collaboration. Approach. The study evaluates the performance of 3Dπ using Monte Carlo simulations based on NEMA NU 2-2018 protocols. The methodology employs a homogeneous, monolithic scintillator composed of liquid argon (LAr) doped with xenon (Xe), read out by silicon photomultipliers (SiPMs) operating at cryogenic temperatures. Main results. Substantial improvements in system performance are observed, with the 3Dπ system achieving a noise equivalent count rate of 3.2 Mcps at 17.3 kBq ml⁻¹, continuing to increase up to 4.3 Mcps at 40 kBq ml⁻¹. Spatial resolution measurements show an average FWHM of 2.7 mm across both axial positions. The system exhibits superior sensitivity, with values reaching 373 kcps MBq⁻¹ with a line source at the center of the field of view. Additionally, 3Dπ achieves a TOF resolution of 151 ps at 5.3 kBq ml⁻¹, highlighting its potential to produce high-quality images with reduced noise levels. Significance. The study underscores the potential of 3Dπ in improving PET imaging performance, offering the potential for shorter scan times and reduced radiation exposure for patients. The Xe-doped LAr offers advantages such as fast scintillation, enhanced light yield, and cost-effectiveness. Future research will focus on optimizing system geometry and further refining reconstruction algorithms to exploit the strengths of 3Dπ for clinical applications.
-
Abstract: Randomization inference is a powerful tool in early phase vaccine trials when estimating the causal effect of a regimen against a placebo or another regimen. Randomization-based inference often focuses on testing either Fisher’s sharp null hypothesis of no treatment effect for any participant or Neyman’s weak null hypothesis of no sample average treatment effect. Many recent efforts have explored conducting exact randomization-based inference for other summaries of the treatment effect profile, for instance, quantiles of the treatment effect distribution function. In this article, we systematically review methods that conduct exact, randomization-based inference for quantiles of individual treatment effects (ITEs) and extend some results to a special case where naïve participants are expected not to exhibit responses to highly specific endpoints. These methods are suitable for completely randomized trials, stratified completely randomized trials, and a matched study comparing two non-randomized arms from possibly different trials. We evaluate the usefulness of these methods using synthetic data in simulation studies. Finally, we apply these methods to HIV Vaccine Trials Network Study 086 (HVTN 086) and HVTN 205 and showcase a wide range of application scenarios of the methods. R code that replicates all analyses in this article can be found on the first author’s GitHub page at https://github.com/Zhe-Chen-1999/ITE-Inference.
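A minimal sketch of the exact randomization p-value machinery that the reviewed quantile-of-ITE methods build on: re-randomize treatment labels and compare the observed difference in means against its randomization distribution under Fisher's sharp null. This is a generic building block under assumed complete randomization, not the article's procedures or its replication code.

```python
import numpy as np

def fisher_randomization_pvalue(y, z, n_draws=10_000, seed=0):
    """One-sided Monte Carlo randomization p-value for a completely randomized trial."""
    rng = np.random.default_rng(seed)
    observed = y[z == 1].mean() - y[z == 0].mean()
    count = 0
    for _ in range(n_draws):
        z_star = rng.permutation(z)                      # re-randomize assignments
        stat = y[z_star == 1].mean() - y[z_star == 0].mean()
        count += stat >= observed
    return (count + 1) / (n_draws + 1)                   # add-one correction preserves validity

# toy usage
rng = np.random.default_rng(2)
z = rng.permutation(np.repeat([0, 1], 30))
y = 0.8 * z + rng.normal(size=60)
print(fisher_randomization_pvalue(y, z))
```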
-
The U.S. Government is developing a package label to help consumers access reliable security and privacy information about Internet of Things (IoT) devices when making purchase decisions. The label will include the U.S. Cyber Trust Mark, a QR code to scan for more details, and potentially additional information. To examine how label information complexity and educational interventions affect comprehension of security and privacy attributes and label QR code use, we conducted an online survey with 518 IoT purchasers. We examined participants’ comprehension and preferences for three labels of varying complexities, with and without an educational intervention. Participants favored and correctly utilized the two higher-complexity labels, showing a special interest in the privacy-relevant content. Furthermore, while the educational intervention improved understanding of the QR code’s purpose, it had a modest effect on QR scanning behavior. We highlight clear design and policy directions for creating and deploying IoT security and privacy labels.
-
Abstract: Randomisation inference (RI) is typically interpreted as testing Fisher’s ‘sharp’ null hypothesis that all unit-level effects are exactly zero. This hypothesis is often criticised as restrictive and implausible, making its rejection scientifically uninteresting. We show, however, that many randomisation tests are also valid for a ‘bounded’ null hypothesis under which the unit-level effects are all non-positive (or all non-negative) but are otherwise heterogeneous. In addition to being more plausible a priori, bounded nulls are closely related to substantively important concepts such as monotonicity and Pareto efficiency. Reinterpreting RI in this way expands the range of inferences possible in this framework. We show that exact confidence intervals for the maximum (or minimum) unit-level effect can be obtained by inverting tests for a sequence of bounded nulls. We also generalise RI to cover inference for quantiles of the individual effect distribution as well as for the proportion of individual effects larger (or smaller) than a given threshold. The proposed confidence intervals for all effect quantiles are simultaneously valid, in the sense that no correction for multiple analyses is required. In sum, our reinterpretation and generalisation provide a broader justification for randomisation tests and a basis for exact non-parametric inference for effect quantiles.
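A minimal sketch, under the bounded-null reinterpretation described above, of obtaining a one-sided confidence bound for the maximum unit-level effect by inverting randomisation tests over a grid of candidate values delta. Each test of "all unit effects ≤ delta" is run as the corresponding constant-effect sharp-null randomisation test with an effect-increasing difference-in-means statistic; the grid, Monte Carlo budget, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pvalue_bounded_null(y, z, delta, n_draws=2000, rng=None):
    """One-sided randomization p-value for H0: every unit-level effect <= delta."""
    rng = np.random.default_rng() if rng is None else rng
    y0 = y - delta * z                              # impute control outcomes at the boundary
    observed = y[z == 1].mean() - y[z == 0].mean()
    count = 0
    for _ in range(n_draws):
        z_star = rng.permutation(z)
        y_star = y0 + delta * z_star                # observed outcomes if every effect equals delta
        stat = y_star[z_star == 1].mean() - y_star[z_star == 0].mean()
        count += stat >= observed
    return (count + 1) / (n_draws + 1)

def lower_bound_max_effect(y, z, grid, alpha=0.05, seed=0):
    """Largest grid value still rejected at level alpha: a lower confidence
    bound for the maximum unit-level effect obtained by test inversion."""
    rng = np.random.default_rng(seed)
    rejected = [d for d in grid if pvalue_bounded_null(y, z, d, rng=rng) <= alpha]
    return max(rejected) if rejected else None

# toy usage with heterogeneous, strictly positive effects
rng = np.random.default_rng(3)
z = rng.permutation(np.repeat([0, 1], 40))
y = rng.normal(size=80) + z * rng.uniform(0.5, 2.0, size=80)
print(lower_bound_max_effect(y, z, grid=np.linspace(-1, 3, 41)))
```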
-
Summary: Power analyses are an important aspect of experimental design, because they help determine how experiments are implemented in practice. It is common to specify a desired level of power and compute the sample size necessary to obtain that power. Such calculations are well known for completely randomized experiments, but there can be many benefits to using other experimental designs. For example, it has recently been established that rerandomization, where subjects are randomized until covariate balance is obtained, increases the precision of causal effect estimators. This work establishes the power of rerandomized treatment-control experiments, thereby allowing for sample size calculators. We find the surprising result that, while power is often greater under rerandomization than complete randomization, the opposite can occur for very small treatment effects. The reason is that inference under rerandomization can be relatively more conservative, in the sense that it can have a lower Type-I error at the same nominal significance level, and this additional conservativeness adversely affects power. This surprising result is due to treatment effect heterogeneity, a quantity often ignored in power analyses. We find that heterogeneity increases power for large effect sizes, but decreases power for small effect sizes.
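A minimal simulation sketch of the kind of power comparison discussed above: estimate the power of a two-sided level-0.05 difference-in-means test under complete randomization versus rerandomization, where assignments are accepted only when the Mahalanobis distance between treated and control covariate means falls below a threshold. The data-generating model, threshold, and sample size are illustrative assumptions, not the paper's calculator.

```python
import numpy as np

def mahalanobis_balance(X, z):
    """Mahalanobis distance between treated and control covariate means."""
    diff = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    n1, n0 = (z == 1).sum(), (z == 0).sum()
    cov = np.cov(X, rowvar=False) * (1 / n1 + 1 / n0)
    return float(diff @ np.linalg.solve(cov, diff))

def draw_assignment(X, n, rerandomize, threshold, rng):
    """Complete randomization, optionally re-drawn until balance is acceptable."""
    base = np.repeat([0, 1], n // 2)
    while True:
        z = rng.permutation(base)
        if not rerandomize or mahalanobis_balance(X, z) <= threshold:
            return z

def simulated_power(tau, n=100, reps=2000, rerandomize=False, threshold=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        X = rng.normal(size=(n, 2))
        y0 = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
        y1 = y0 + tau + 0.3 * rng.normal(size=n)        # heterogeneous unit-level effects
        z = draw_assignment(X, n, rerandomize, threshold, rng)
        y = np.where(z == 1, y1, y0)
        est = y[z == 1].mean() - y[z == 0].mean()
        se = np.sqrt(y[z == 1].var(ddof=1) / (n // 2) + y[z == 0].var(ddof=1) / (n // 2))
        rejections += abs(est / se) > 1.96
    return rejections / reps

print("Complete randomization power:", simulated_power(0.5))
print("Rerandomization power:       ", simulated_power(0.5, rerandomize=True))
```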
